Search in the Catalogues and Directories

Hits 81 – 99 of 99

81
Time map phonology : finite state models and event logics in speech recognition
Carson-Berndsen, Julie. - Dordrecht [etc.] : Kluwer, 1998
BLLDB
Institut für Empirische Sprachwissenschaft
UB Frankfurt Linguistik
82
Automatic Articulatory Annotation of Multi Sensor Database
In: Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1996), Atlanta, United States, pp. 829-832. ISBN 0-7803-3192-3. DOI: 10.1109/ICASSP.1996.543249
URL: https://hal.archives-ouvertes.fr/hal-03615575
https://ieeexplore.ieee.org/document/543249
BASE
83
Discourse goals and attentional processes in sentence production : the dynamic construal of events
In: Conceptual structure, discourse and language. - Stanford : CSLI Publications (1996), 149-161
BLLDB
84
Selective impairments of action naming : arguments and a case study
In: Linguistics and cognitive neuroscience. - Opladen : Westdeutscher Verlag (1994), 62-82
BLLDB
85
Mining Temporal Patterns of Movement for Video Content Classification
Fleischman, Michael (Cognitive Machines Group)
In: http://www.mit.edu/~mbf/MIR_06.pdf
BASE
86
Using Syntactic Dependencies and WordNet Classes for Noun Event Recognition
In: http://ceur-ws.org/Vol-902/paper_5.pdf
BASE
87
Motion events in language and cognition
In: http://xcelab.net/rm/wp-content/uploads/2008/10/motion_events.pdf
BASE
88
Video Content Classification
In: http://www.ismll.uni-hildesheim.de/lehre/semML-09s/script/p183-fleischman.pdf
BASE
89
Video Content Classification
In: http://web.media.mit.edu/~dkroy/papers/pdf/fleischman_decamp_2006.pdf
BASE
90
Mining Temporal Patterns of Movement for Video Event Recognition
Fleischman, Michael (Cognitive Machines Group)
In: http://www.media.mit.edu/cogmac/publications/MIR_06.pdf
BASE
91
Manuscript statistics: Words: 14,000
In: http://www.isc.cnrs.fr/dom/Dominey-AIJR5.pdf
BASE
92
ERP Evidence for an Interaction between Phonological and Semantic Processes in Masked Priming Tasks
In: http://www.ddl.ish-lyon.cnrs.fr/fulltext/Jacquier/Jacquier_2005_CogSci.pdf
BASE
93
ERP Evidence for an Interaction between Phonological and Semantic Processes in Masked Priming Tasks
In: http://www.psych.unito.it/csc/cogsci05/frame/poster/1/f718-jacquier.pdf
BASE
94
ERP Evidence for an Interaction between Phonological and Semantic Processes in Masked Priming Tasks
In: http://csjarchive.cogsci.rpi.edu/Proceedings/2005/docs/p1030.pdf
BASE
95
A Hierarchical Framework for Understanding Human-Human … (invited paper)
In: http://www.cs.rochester.edu/u/spark/papers/Park_EI121_2005_invited.pdf
BASE
96
Processing Visual Words With Numbers: Electrophysiological Evidence for Semantic Activation
BASE
97
Electrophysiological Evidence of Different Loci for Case Mixing and Word Frequency Effects in Visual Word Recognition
BASE
98
Visual information constrains early and late stages of spoken-word recognition in sentence context
Abstract: Audiovisual speech perception has frequently been studied at the phoneme, syllable and word processing levels. Here, we examined the constraints that visual speech information might exert during the recognition of words embedded in a natural sentence context. We recorded event-related potentials (ERPs) to words that could be either strongly or weakly predictable on the basis of the prior semantic sentential context, and whose initial phoneme varied in the degree of visual saliency from lip movements. When the sentences were presented audio-visually (Experiment 1), words weakly predicted from semantic context elicited a larger, long-lasting N400 compared to strongly predictable words. This semantic effect interacted with the degree of visual saliency over a late part of the N400. When comparing audio-visual versus auditory-alone presentation (Experiment 2), the typical amplitude-reduction effect over the auditory-evoked N100 response was observed in the audiovisual modality. Interestingly, a specific benefit of high- versus low-visual-saliency constraints occurred over the early N100 response and at the late N400 time window, confirming the result of Experiment 1. Taken together, our results indicate that the saliency of visual speech can exert an influence on both auditory processing and word recognition at relatively late stages, suggesting strong interactivity between audio-visual integration and other (arguably higher) stages of information processing during natural speech comprehension.
Funding: This research was supported by the Spanish Ministry of Science and Innovation (PSI2010-15426 and Consolider INGENIO CSD2007-00012), the Comissionat per a Universitats i Recerca del DIUE-Generalitat de Catalunya (SGR2009-092), and the European Research Council (StG-2010263145).
Keywords: Event-related potentials; Semantic constraints; Spoken-word recognition; Visual speech
URL: http://hdl.handle.net/10230/24964
https://doi.org/10.1016/j.ijpsycho.2013.06.016
BASE
99
Unraveling the mystery about the negative valence bias: does arousal account for processing differences in unpleasant words?
BASE
